Thursday, June 15, 2017
Ruxandra Marinescu-Ghemeci (University of Bucharest)
New coloring problems in graphs
Abstract: Various types of colorings for vertices and edges of a graph were introduced
motivated by interference problems in communication networks, such as radio colorings. But,
in order to solve interference problems, it can be sufficient to ensure that between every
pair of nodes there is at least one path on which communication can be made without interference.
In terms of graphs, this means that any two vertices are connected through a colored path
satisfying certain constraints. The best-known types of connectivity through colored paths
are rainbow connectivity and proper connectivity. This talk aims to briefly present radio
colorings and existing types of chromatic connectivity, and to provide preliminary results
on radio connectivity, that is, connectivity through radio-colored paths.
We also present some complexity results on the proper connectivity of digraphs (proper
connectivity being radio connectivity for one level of interference); this is joint work
with Guillaume Ducoffe and Alexandru Popa.
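As a concrete illustration of connectivity through colored paths, the sketch below (function names and the example coloring are ours, not the talk's) decides whether two vertices are joined by a properly colored path, i.e. one on which consecutive edges have different colors, by searching over (vertex, color-of-last-edge) states:

```python
from collections import deque

def properly_connected(n, colored_edges, s, t):
    """Decide whether s and t are joined by a properly colored path
    by BFS over states (vertex, color of the edge used to enter it)."""
    adj = {v: [] for v in range(n)}
    for u, v, c in colored_edges:
        adj[u].append((v, c))
        adj[v].append((u, c))
    queue = deque([(s, None)])       # None: no edge used yet
    seen = {(s, None)}
    while queue:
        v, last = queue.popleft()
        if v == t:
            return True
        for w, c in adj[v]:
            if c != last and (w, c) not in seen:
                seen.add((w, c))
                queue.append((w, c))
    return False
```

On the path 0–1–2 with both edges red there is no properly colored 0–2 path, while recoloring one edge makes the pair properly connected.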
Thursday, June 8, 2017
Irina Guțanu (University of Bucharest)
Random Graphs and Zero-One Laws
Abstract: The theory of random graphs lies at the intersection of graph theory and probability theory,
and plays an important role in understanding the functioning of complex networks, more precisely in
building the random network model. In this presentation we will introduce the Erdős–Rényi random graph model,
as well as the Ehrenfeucht–Fraïssé game, the zero-one laws, and the evolution of network
theory from the random network model to the Watts–Strogatz model.
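The Erdős–Rényi model mentioned above is easy to experiment with; a minimal sketch (function names are illustrative) samples $G(n,p)$ and tests connectivity:

```python
import random

def gnp(n, p, rng):
    """Sample an Erdős–Rényi G(n, p) graph: each of the C(n, 2)
    possible edges is included independently with probability p."""
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p]

def is_connected(n, edges):
    """Union-find connectivity check."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(v) for v in range(n)}) == 1

# Around the threshold p = ln(n)/n, G(n, p) is almost surely connected
# slightly above it and almost surely disconnected slightly below.
rng = random.Random(0)
sample = gnp(100, 0.2, rng)   # p well above ln(100)/100 ≈ 0.046
```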
Thursday, May 25, 2017
Andrei Sipoș (IMAR & University of Bucharest)
Proof mining in convex optimization II: the uniform case
Abstract: We show how each instance of the Proximal Point Algorithm admits, as an input case,
a "uniform case", which can be abstracted into the definition presented last time. This case
is of special interest because the algorithm is then constrained to converge (strongly!)
to a single possible value. This uniqueness can be quantified into a "modulus of uniqueness"
(not necessarily explicitly), and then, using a technique pioneered by Eyvind Briseid, one
can extract full rates of convergence for this family of algorithms even in the presence of
the law of excluded middle. Alongside these explicit proofs, we shall present both a general
motivation for proof mining and the schema of application which pertains to the case at hand.
These results are joint work with Laurențiu Leuștean and Adriana Nicolae.
Thursday, May 18, 2017
Andrei Sipoș (IMAR & University of Bucharest)
Proof mining in convex optimization
Abstract: We present another instance of Kohlenbach's program of proof mining (i.e. obtaining quantitative results out of proofs which are not fully
constructive) in the field of convex optimization.
The concept known as the proximal point algorithm is in actuality a
class of similar algorithms used in that area to compute various
extrema such as zeroes of monotone operators, critical points of
convex functions or fixed points of nonexpansive mappings. We present
general definitions into which all such algorithms can be subsumed,
regardless of what the original method for proving their convergence
had been, as well as quantitative information obtained in an abstract
way for each tractable subclass.
These results are joint work with Laurențiu Leuștean and Adriana Nicolae.
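For concreteness, the proximal point algorithm can be stated in its classical form (a standard formulation, not necessarily the exact level of generality used in the talk): for a maximally monotone operator $A$ on a Hilbert space and positive step sizes $(\gamma_n)$,

```latex
\[
  x_{n+1} = J_{\gamma_n A}(x_n) := (\mathrm{Id} + \gamma_n A)^{-1}(x_n),
\]
```

so that the fixed points of the resolvents $J_{\gamma A}$ are exactly the zeros of $A$; the instances for convex functions and nonexpansive mappings arise by taking $A = \partial f$ or $A = \mathrm{Id} - T$.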
Thursday, May 4, 2017
Ionuț Ţuțu (Royal Holloway, University of London)
A logic-programming approach to graph transformation
Abstract: The use of graphs and of rule-based graph transformations for the specification and analysis
of software systems has a long tradition in computer science. Its focus is on those systems whose states can
be formalized as graph-like structures, and whose evolution can be represented by means of discrete graph
reconfigurations. In this talk, we revisit several algebraic approaches to graph transformation from
the vantage point of the theory of institutions, and we show how graph-transformation frameworks
can be described in the context of substitution systems. This sets the foundation for conducting
research into a graph-transformation variant of logic programming that would uniformly accommodate,
for the first time, both the traditional, well-established operational semantics of graph transformation
and the more recent denotational semantics of the paradigm, which is still in its very early stages of development.
Thursday, April 27, 2017
Mircea Dumitru (University of Bucharest)
Theories of Compositionality. Kit Fine's Semantic Relationism II
Thursday, April 6, 2017
Mihai Prunescu (IMAR)
An application of p-adic norms in geometry
Abstract: All $p$-adic norms can be extended to the field of the reals in various ways.
This fact has applications in elementary geometry, as first shown by Monsky in 1970.
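A minimal sketch of the $p$-adic norm on the rationals (function names are illustrative; the extension to all of the reals used in Monsky's proof is non-constructive and not reproduced here):

```python
from fractions import Fraction

def vp(x, p):
    """p-adic valuation of a nonzero rational x = a/b:
    v_p(x) = (multiplicity of p in a) - (multiplicity of p in b)."""
    x = Fraction(x)
    a, b, v = x.numerator, x.denominator, 0
    while a % p == 0:
        a //= p
        v += 1
    while b % p == 0:
        b //= p
        v -= 1
    return v

def norm_p(x, p):
    """p-adic norm |x|_p = p^(-v_p(x)); by convention |0|_p = 0."""
    return 0.0 if x == 0 else float(p) ** (-vp(x, p))
```

In Monsky's argument it is the 2-adic norm, extended from the rationals to the reals, that yields the result that a square cannot be cut into an odd number of triangles of equal area.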
Thursday, March 30, 2017
Iuliana Iatan (Technical University of Civil Engineering Bucharest)
Predicting Human Personality from Social Media using a Fuzzy Neural Network
Abstract: In this work we propose a specific neural network for predicting personality,
a special type of Fuzzy Gaussian Neural Network (FGNN); what makes it special are the
connections between the second and third layers and the operations performed at the nodes.
We propose to apply the FGNN to predicting a user's Big Five personality traits
(the five-factor model of personality) from the public information they share on Facebook.
The Big Five traits are the following: Neuroticism, Extraversion, Openness,
Agreeableness, and Conscientiousness. To emphasize the performance of our proposed approach
for predicting personality we have compared it both with a neural regression method
(the Multilayer Perceptron, MP) and with a non-neural approach, the Multiple Linear
Regression Model (MLRM). The comparison of FGNN and MP against MLRM pits both
nonlinear against linear and neural against statistical methods. To test the performance
of the neuro-fuzzy prediction achieved with the FGNN we use the Normalized Root Mean
Square Error (NRMSE). According to the NRMSE criterion, the prediction with the FGNN
is better than with the other two methods, on both the training set and the test set.
References:
[1] Iatan, I.F. (2016), Issues in the Use of Neural Networks in Information Retrieval, Springer.
Thursday, March 16, 2017
Mircea Dumitru (University of Bucharest)
Theories of Compositionality. Kit Fine's Semantic Relationism
Abstract: The paper is an assessment of compositionality from the vantage point of Kit
Fine’s semantic relationist approach to meaning. This relationist view deepens our conception
of how the meanings of propositions depend not only on the semantic features and roles of each
separate meaningful unit in a complex, but also on the relations those units bear to each other.
The telling feature of the formal apparatus of this Finean relationist syntax and semantics, viz. the
coordination scheme, has some unexpected consequences that will emerge against the background
of an analogy with the counterpart theoretic semantics for modal languages.
Thursday, March 9, 2017
Alex Popa (University of Bucharest)
Approximation algorithms for NP-hard problems
Abstract: Under the strongly believed conjecture that $P \ne NP$, no polynomial
time exact algorithms exist for many important problems with
applications in areas like bioinformatics, network design and string
processing. However, suboptimal solutions (which run in polynomial
time) are most of the time sufficient for practical applications. The
study of approximation algorithms, which is my domain of expertise,
has two major branches: improving the upper bounds (i.e. designing
better approximation algorithms) and proving lower bounds (i.e.
hardness of approximation). Both directions are equally interesting:
the former for providing solutions to NP-hard problems, and the latter
for giving a better understanding of the limits of approximation
algorithms. This talk aims to provide a brief introduction into the
approximation algorithms area and to present two of the NP-hard
problems that I have studied.
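The abstract does not name the two problems, so as a generic illustration of the upper-bound branch, here is the classical 2-approximation for minimum vertex cover via a greedy maximal matching (a textbook example, not one of the talk's problems):

```python
def vertex_cover_2approx(edges):
    """Greedy maximal matching: for every still-uncovered edge take
    both endpoints. The result is a vertex cover of size at most twice
    the optimum, since any cover must contain at least one endpoint of
    each edge matched here."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover
```

On the 4-cycle the optimum cover has 2 vertices and the greedy cover has at most 4, matching the factor-2 guarantee.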
Thursday, March 2, 2017
Guillaume Ducoffe (Université Côte d’Azur, Inria, CNRS, I3S)
Treewidth vs. Treelength
Abstract: We present new relationships between treewidth and treelength, two parameters
providing distinct, yet complementary, pieces of information on the closeness
of a graph to a tree.
Intuitively, treewidth measures how close the structure of a graph is to that of
a tree, whereas treelength measures the minimum distortion of the distances in a graph when
it is embedded into a tree. These two parameters have important algorithmic applications.
For instance, many Constraint Satisfaction Problems become tractable if one
of these two parameters is bounded for the graph of constraints. So, it is interesting
to relate them. On the one hand, cycles (with bounded treewidth but unbounded treelength)
and complete graphs (with bounded treelength but unbounded treewidth) show that in general,
there can be no relationship between the two parameters. On the other hand, our main result
is that removing large cycles and large cliques makes treewidth and treelength comparable.
More precisely, we prove that treelength is linearly upper-bounded by treewidth on classes
of graphs with bounded shortest maximal cycle basis. Conversely, treewidth is linearly
upper-bounded by treelength for all classes of apex-minor free graphs.
This is joint work with David Coudert and Nicolas Nisse.
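For concreteness, the two extremal families mentioned above have the following standard parameter values (stated here as a reminder, not as a claim about the talk's results):

```latex
\[
  \mathrm{tw}(C_n) = 2, \quad \mathrm{tl}(C_n) = \lceil n/3 \rceil,
  \qquad
  \mathrm{tw}(K_n) = n - 1, \quad \mathrm{tl}(K_n) = 1.
\]
```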
Thursday, February 23, 2017
Adrian Dușa (University of Bucharest)
Exact and exhaustive Boolean minimization algorithms applied in fuzzy set QCA
Abstract: Unlike the engineering field, where a single minimal solution is sufficient
to end the Boolean minimization process, the social sciences require all possible solutions
for given input data. Current mainstream algorithms applied in engineering (like Espresso,
or the newer Scherzo) are known to arrive at a minimal solution using a heuristic approach
(although they claim to be able to generate exact solutions, as well).
Generally, an exact and especially an exhaustive list of solutions comes at a cost in terms of
memory used and computer cycles, but there are alternatives that cut both down and can deal with
about 18 inputs at a reasonable speed, using compiled C code.
Thursday, February 16, 2017
Alexandru Dragomir (University of Bucharest)
Knowability and Fitch's Paradox in Dynamic Epistemic Logic
Abstract: The Verificationist Thesis states that all truths are knowable, or, in other words, that there are no transcendent truths.
The classical translation of this thesis into a modal-epistemic language was proven to be
paradoxical by showing that the modal-epistemic concept of knowability implies a contradiction
(the Church-Fitch Paradox). However, using the language and semantical tools of Dynamic
Epistemic Logic, one can formally define a notion of knowability that does not succumb to paradox.
As such, I will:
(1) offer a short introduction to Epistemic Temporal Logic and Temporal Dynamic Epistemic Logic;
(2) present various formulations of knowability in the language of these Dynamic Epistemic Logics;
(3) present a personal semantical framework (based on Hoshi's 2009 work in Epistemic Temporal Logic)
in which to both state and prove the consistency of Edgington's concept of knowability.
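For context, the Church–Fitch derivation can be sketched in a few lines (a standard presentation, assuming knowledge is factive and distributes over conjunction):

```latex
\begin{align*}
  &\forall p\,(p \rightarrow \Diamond K p)
     && \text{(knowability; instantiate with } p \wedge \neg K p\text{)}\\
  &(p \wedge \neg K p) \rightarrow \Diamond K(p \wedge \neg K p)\\
  &K(p \wedge \neg K p) \rightarrow (K p \wedge K \neg K p)
     \rightarrow (K p \wedge \neg K p)
     && \text{(distribution, then factivity)}\\
  &\therefore\; \neg \Diamond K(p \wedge \neg K p),
     \text{ hence } \neg(p \wedge \neg K p),
     \text{ i.e. } p \rightarrow K p.
\end{align*}
```

So the verificationist premise collapses into the claim that every truth is known, which is the paradox the talk's dynamic-epistemic notion of knowability avoids.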
References:
[1] van Ditmarsch, H.P., van der Hoek, W., Kooi, B. (2007) Dynamic Epistemic Logic,
Volume 337 of Synthese Library, Netherlands: Springer.
[2] Hoshi, T. (2009) Epistemic dynamics and protocol information. PhD thesis, Stanford, CA, USA.
[3] Edgington, D. (1985). The Paradox of Knowability. Mind, 94(376).
Thursday, February 9, 2017
Alexandru Dragomir (University of Bucharest)
An introduction to dynamic epistemic logic
Abstract: The term “Dynamic Epistemic Logic” is ambiguous: it may denote (1) a paradigm
of designing logical systems so as to make them able to represent the evolution of an agent's
(or a group of agents') knowledge representation system upon receiving new information, or
(2) one of many logical systems that pertain to this aim. As such, I will present both the
idea underlying the design of such systems and some particular ones: Public Announcement
Logic (with Public Assignments), Conditional Doxastic Logic and Temporal Public Announcement Logic.
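As a minimal illustration of the "dynamic" part (a toy model, not the talk's formalism), a public announcement simply restricts the epistemic model to the worlds where the announced fact holds:

```python
def announce(worlds, relation, fact):
    """Public announcement of a fact: keep only the worlds where the
    fact holds, and restrict the accessibility relation to them
    (the PAL update W -> {w in W : w satisfies fact})."""
    kept = {w for w in worlds if fact(w)}
    return kept, {(u, v) for (u, v) in relation if u in kept and v in kept}

def knows(relation, w, prop):
    """The agent knows prop at w iff prop holds at every world
    accessible from w."""
    return all(prop(v) for (u, v) in relation if u == w)

# A coin lies hidden: worlds 'h' and 't', indistinguishable to the agent.
worlds = {'h', 't'}
relation = {(u, v) for u in worlds for v in worlds}
heads = lambda w: w == 'h'
```

Before announcing "heads" the agent does not know the coin is heads; after the announcement, it does.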
References:
[1] van Ditmarsch, H.P., van der Hoek, W., Kooi, B. (2007) Dynamic Epistemic Logic,
Volume 337 of Synthese Library, Netherlands: Springer.
[2] Hoshi, T. (2009) Epistemic dynamics and protocol information. PhD thesis, Stanford, CA, USA.
Monday, November 28, 2016, 17:00, Hall 202
Roberto Giuntini (University of Cagliari)
Yes, No, Perhaps: A logical introduction to quantum computation
Abstract:
Quantum computation has suggested new kinds of logic, which are deeply different both
from Boolean logic (the logical background of classical computation) and from
multi-valued (fuzzy) logics.
The most striking feature of quantum computational logic is the introduction in the
realm of ‘pure logic’ of new and physically motivated connectives (gates) that have neither
a classical nor a fuzzy-like analogue.
In this talk, we will present some of these connectives (in particular, “the square-root of
negation” and “the square-root of the identity“) and we will discuss some of their most ‘funny’
and ‘illogical’ properties.
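The "square root of negation" can be written down concretely (a standard matrix representation; whether the talk uses exactly these gates is an assumption):

```python
import numpy as np

# The NOT gate and a gate V with V @ V = NOT: a "square root of
# negation", which has no counterpart among classical Boolean gates.
NOT = np.array([[0, 1], [1, 0]], dtype=complex)
V = 0.5 * np.array([[1 + 1j, 1 - 1j],
                    [1 - 1j, 1 + 1j]])

# One square root of the identity other than the identity itself
# (the Hadamard gate; whether this is the talk's gate is an assumption).
H = np.sqrt(0.5) * np.array([[1.0, 1.0],
                             [1.0, -1.0]])

assert np.allclose(V @ V, NOT)                  # applying V twice negates
assert np.allclose(V @ V.conj().T, np.eye(2))   # V is unitary, as a gate must be
assert np.allclose(H @ H, np.eye(2))            # H squares to the identity
```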
Thursday, November 24, 2016
Mihai Prunescu (IMAR)
Decidability and definability in number theory II
Thursday, November 17, 2016
Mihai Prunescu (IMAR)
Decidability and definability in number theory
Abstract:
This is a report on results and discussions in the Oberwolfach workshop "Decidability and definability in number theory".
Thursday, November 10, 2016
Adrian Dușa (University of Bucharest)
QCA - Qualitative Comparative Analysis. An application of boolean algebra and fuzzy sets
Abstract:
Boolean algebra has long been recognised as a branch of mathematics with significant applications
in engineering, leading to the appearance of modern computers. Recently, however, it has found
its way to a very interesting application in comparative analysis in the social sciences,
which starts from specially formatted (calibrated) data to find the minimal causal combinations
that are necessary and/or sufficient to produce a certain outcome. It does so by using the
famous Quine–McCluskey algorithm to compare all possible pairs of minimizable cases, over
multiple iterations, until nothing can be further minimized, the surviving combinations
being called prime implicants. In terms of software implementations there are many products
in various fields, one of them being the QCA package in R, which is mathematically
proven to be exact and exhaustive. The usage of fuzzy sets is particularly interesting in this
procedure, both for the calibration of the original variables and for the overall assessment
of the necessity and sufficiency concepts, by calculating the so-called inclusion and coverage scores.
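The pairwise-merging step of Quine–McCluskey described above can be sketched as follows (a minimal illustration over bit-strings with the don't-care symbol '-'; it omits the prime-implicant chart needed to assemble the exhaustive list of solutions):

```python
from itertools import combinations

def combine_pass(implicants):
    """One Quine–McCluskey iteration: merge every pair of implicants
    (strings over '0', '1', '-') differing in exactly one position,
    replacing that position by the don't-care symbol '-'."""
    merged, used = set(), set()
    for a, b in combinations(implicants, 2):
        diff = [i for i in range(len(a)) if a[i] != b[i]]
        if len(diff) == 1 and '-' not in (a[diff[0]], b[diff[0]]):
            merged.add(a[:diff[0]] + '-' + a[diff[0] + 1:])
            used.update((a, b))
    # implicants that merged with nothing survive as prime implicants
    return merged, set(implicants) - used

def prime_implicants(minterms):
    """Iterate until nothing can be merged; the survivors are primes."""
    current, primes = set(minterms), set()
    while current:
        merged, step_primes = combine_pass(sorted(current))
        primes |= step_primes
        current = merged
    return primes
```

For the two-variable function with minterms 00, 01, 10 this yields the primes `0-` and `-0`, i.e. the minimization to x' + y'.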
Thursday, October 27, 2016
Gabriel Sandu (University of Helsinki & ICUB)
Independence-Friendly logic (IF logic): the main ideas and technical results
Abstract:
IF logic was introduced by Jaakko Hintikka and Gabriel Sandu in 1989. It is an extension
of ordinary first-order logic with quantifiers of the form '$Qx/Y$' where $Y$ is a finite set
of variables and $Q$ stands for either the existential or the universal quantifier. When $Y$
is empty, we recover the standard quantifiers. The intended interpretation of '$Qx/Y$' is: the
choice of a value for the variable $x$ (from the underlying universe of discourse) is informationally
independent of the choices of values for the variables in the set $Y$. The idea of informational
independence comes from game theory (games of imperfect information). IF logic has been the motivating
force for several logics of dependence and independence that have flourished recently
(Dependence Logics, Inclusion Logics, Independence Logics, Modal Dependence Logics, etc).
I will present some of the main ideas and some of the technical results, including the probabilistic
interpretation of IF sentences, which connects to many-valued logics.
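A standard textbook example illustrates the slashed quantifier (whether it appeared in the talk is an assumption):

```latex
\[
  \forall x\, (\exists y/\{x\})\; x = y
\]
```

This sentence is true in a model iff the existential player can choose a value for $y$ without seeing $x$ that is guaranteed to equal it, i.e. iff the universe has exactly one element, whereas the slash-free $\forall x \exists y\; x = y$ is true in every model.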
Thursday, October 20, 2016
Virgil Şerbănuţă (Google, Zürich)
A Simple Universe Argument
Abstract:
This paper presents an argument that, with probability 1, the laws that govern a non-designed
universe are infinitely complex at any level of observation. From this it follows that there
is probability 0 of us living in a non-designed universe.
Thursday, October 5, 2016
Andrei Sipoș (IMAR & University of Bucharest)
Proof mining in $L^p$ spaces
Abstract:
We obtain an equivalent implicit characterization of $L^p$ Banach spaces that is amenable to
a logical treatment. Using it, we obtain an axiomatization of such spaces in a higher-order
logical system of the kind used in proof mining, a research program that aims to
obtain the hidden computational content of mathematical proofs using tools from mathematical
logic. The axiomatization is followed by a corresponding metatheorem in the style of proof mining.
We illustrate its use with an application, namely the derivation of the standard modulus of uniform convexity.
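As a point of reference, the modulus of uniform convexity of $L^p$ for $2 \le p < \infty$ that follows from Clarkson's first inequality is (whether the talk derives exactly this expression is an assumption):

```latex
\[
  \delta_{L^p}(\varepsilon)
  = 1 - \left(1 - \left(\frac{\varepsilon}{2}\right)^{p}\right)^{1/p},
  \qquad 0 < \varepsilon \le 2.
\]
```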